Wrist fractures are frequently encountered in hospitals, particularly in emergency services. Physicians need images from various medical devices, together with the patient's medical history and physical examination, to diagnose these fractures correctly and apply appropriate treatment. This study aims to perform fracture detection using deep learning on wrist X-ray images in order to assist physicians, especially those working in emergency services, in the diagnosis of fractures. For this purpose, 20 different detection procedures were performed using deep learning-based object detection models on a dataset of wrist X-ray images obtained from Gazi University Hospital. DCN, Dynamic R-CNN, Faster R-CNN, FSAF, Libra R-CNN, PAA, RetinaNet, RegNet, and SABL deep learning-based object detection models with various backbones were used. To further improve the detection procedures in the study, five different ensemble models were developed, which were later combined to form a detection model unique to this study, titled Wrist Fracture Detection-Combo (WFD-C). A total of 26 different fractures were detected, and the highest detection result was 0.8639 average precision (AP50) with the WFD-C model. This study is supported by the Huawei Turkey R&D Center within the scope of the ongoing cooperation project coded 071813 between Gazi University, Huawei, and Medskor.
translated by Google Translate
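The abstract above describes merging the outputs of several object detectors into a combined "WFD-C" model, but does not spell out the fusion rule. One common way to ensemble detectors is greedy cross-model non-maximum suppression; the sketch below illustrates that general idea only (the function names, box format, and threshold are hypothetical, not taken from the paper).

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse_detections(model_outputs, iou_thr=0.5):
    """Greedy cross-model NMS: pool boxes from all models, keep the
    highest-scoring box, and suppress boxes that overlap a kept one."""
    pooled = sorted((d for out in model_outputs for d in out),
                    key=lambda d: d["score"], reverse=True)
    kept = []
    for det in pooled:
        if all(iou(det["box"], k["box"]) < iou_thr for k in kept):
            kept.append(det)
    return kept
```

With two detectors proposing overlapping boxes for the same fracture, only the higher-scoring box survives, while a non-overlapping detection from either model is kept.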
In inverse reinforcement learning (IRL), a learning agent infers a reward function encoding the underlying task using demonstrations from experts. However, many existing IRL techniques make the often unrealistic assumption that the agent has access to full information about the environment. We remove this assumption by developing an algorithm for IRL in partially observable Markov decision processes (POMDPs). We address two limitations of existing IRL techniques. First, they require an excessive amount of data due to the information asymmetry between the expert and the learner. Second, most of these IRL techniques require solving the computationally intractable forward problem -- computing an optimal policy given a reward function -- in POMDPs. The developed algorithm reduces the information asymmetry while increasing the data efficiency by incorporating task specifications expressed in temporal logic into IRL. Such specifications may be interpreted as side information available to the learner a priori in addition to the demonstrations. Further, the algorithm avoids a common source of algorithmic complexity by building on causal entropy, rather than entropy, as the measure of the likelihood of the demonstrations. Nevertheless, the resulting problem is nonconvex due to the so-called forward problem. We address the intrinsic nonconvexity of the forward problem in a scalable manner through a sequential linear programming scheme that is guaranteed to converge to a locally optimal policy. In a series of examples, including experiments in a high-fidelity Unity simulator, we demonstrate that even with a limited amount of data and POMDPs with tens of thousands of states, our algorithm learns reward functions and policies that satisfy the task while inducing similar behavior to the expert by leveraging the provided side information.
Data-driven soft sensors are extensively used in industrial and chemical processes to predict hard-to-measure process variables whose real value is difficult to track during routine operations. The regression models used by these sensors often require a large number of labeled examples, yet obtaining the label information can be very expensive given the high time and cost required by quality inspections. In this context, active learning methods can be highly beneficial as they can suggest the most informative labels to query. However, most of the active learning strategies proposed for regression focus on the offline setting. In this work, we adapt some of these approaches to the stream-based scenario and show how they can be used to select the most informative data points. We also demonstrate how to use a semi-supervised architecture based on orthogonal autoencoders to learn salient features in a lower dimensional space. The Tennessee Eastman Process is used to compare the predictive performance of the proposed approaches.
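The stream-based active learning setting described above decides, one sample at a time, whether a label is worth querying. A standard informativeness score for regression is the disagreement (prediction variance) of a committee of models; the minimal sketch below shows that selection rule only, and all names and thresholds are illustrative, not from the paper.

```python
import numpy as np

def committee_disagreement(models, x):
    """Variance of the committee's predictions for input x: a common
    informativeness score in active learning for regression."""
    preds = np.array([m(x) for m in models])
    return preds.var()

def should_query(models, x, threshold):
    """Stream-based selective sampling: request the (expensive) label
    only when the committee disagrees more than `threshold`."""
    return committee_disagreement(models, x) > threshold
```

In a soft-sensor stream, points the committee agrees on are skipped, so quality inspections are spent only on the most informative samples.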
This study uses multisensory data (i.e., color and depth) to recognize human actions in the context of multimodal human-robot interaction. Here we employed the iCub robot to observe the predefined actions of the human partners by using four different tools on 20 objects. We show that the proposed multimodal ensemble learning leverages complementary characteristics of three color cameras and one depth sensor that improves, in most cases, recognition accuracy compared to the models trained with a single modality. The results indicate that the proposed models can be deployed on the iCub robot that requires multimodal action recognition, including social tasks such as partner-specific adaptation, and contextual behavior understanding, to mention a few.
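The multimodal ensemble described above combines color and depth streams. The abstract does not specify the fusion mechanism, but one simple baseline for combining per-modality classifiers is a weighted average of their class probabilities (late fusion); the sketch below shows only that baseline, with hypothetical names.

```python
import numpy as np

def late_fusion(prob_list, weights=None):
    """Combine per-modality class probabilities by a (weighted) average --
    a simple late-fusion ensemble over e.g. RGB and depth streams."""
    probs = np.stack([np.asarray(p, float) for p in prob_list])  # (n_modalities, n_classes)
    w = np.ones(len(prob_list)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()                       # normalize modality weights
    fused = (w[:, None] * probs).sum(axis=0)
    return int(fused.argmax()), fused
```

When the RGB stream and the depth stream disagree, the fused distribution lets the more confident modality dominate the action label.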
The lack of standardization is a prominent issue in magnetic resonance (MR) imaging. This often causes undesired contrast variations due to differences in hardware and acquisition parameters. In recent years, MR harmonization using image synthesis with disentanglement has been proposed to compensate for the undesired contrast variations. Despite the success of existing methods, we argue that three major improvements can be made. First, most existing methods are built upon the assumption that multi-contrast MR images of the same subject share the same anatomy. This assumption is questionable since different MR contrasts are specialized to highlight different anatomical features. Second, these methods often require a fixed set of MR contrasts for training (e.g., both T1-weighted and T2-weighted images must be available), which limits their applicability. Third, existing methods are generally sensitive to imaging artifacts. In this paper, we present a novel approach, Harmonization with Attention-based Contrast, Anatomy, and Artifact Awareness (HACA3), to address these three issues. We first propose an anatomy fusion module that enables HACA3 to respect the anatomical differences between MR contrasts. HACA3 is also robust to imaging artifacts and can be trained and applied to any set of MR contrasts. Experiments show that HACA3 achieves state-of-the-art performance under multiple image quality metrics. We also demonstrate the applicability of HACA3 on downstream tasks with diverse MR datasets acquired from 21 sites with different field strengths, scanner platforms, and acquisition protocols.
A new development in NLP is the construction of hyperbolic word embeddings. As opposed to their Euclidean counterparts, hyperbolic embeddings are represented not by vectors, but by points in hyperbolic space. This makes the most common basic scheme for constructing document representations, namely the averaging of word vectors, meaningless in the hyperbolic setting. We reinterpret the vector mean as the centroid of the points represented by the vectors, and investigate various hyperbolic centroid schemes and their effectiveness at text classification.
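The abstract above reinterprets the word-vector mean as a centroid of points in hyperbolic space. One classical hyperbolic centroid is the Einstein midpoint in the Klein model: a Euclidean average of the points weighted by their Lorentz factors. The sketch below shows that construction under the assumption that embeddings lie in the Klein ball (norm < 1); it is an illustration of one candidate centroid scheme, not the paper's specific method.

```python
import numpy as np

def einstein_midpoint(points):
    """Centroid of points in the Klein model of hyperbolic space:
    a Lorentz-factor-weighted Euclidean average (Einstein midpoint)."""
    x = np.asarray(points, float)                        # (n, d), each ||x_i|| < 1
    gamma = 1.0 / np.sqrt(1.0 - (x ** 2).sum(axis=1))    # Lorentz factors
    return (gamma[:, None] * x).sum(axis=0) / gamma.sum()
```

A document representation is then the midpoint of its word points; for points placed symmetrically about the origin the midpoint is the origin, matching the Euclidean intuition.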
Facial recognition is fundamental for a wide variety of security systems operating in real-time applications. In video surveillance based face recognition, face images are typically captured over multiple frames in uncontrolled conditions, where head pose, illumination, shadowing, motion blur, and focus change over the sequence. Facial recognition tasks generally involve three fundamental operations: face detection, face alignment, and face recognition. This study presents comparative benchmark tables for state-of-the-art face recognition methods by testing them with the same backbone architecture, in order to focus on the face recognition solution itself rather than the network architecture. For this purpose, we constructed a video surveillance dataset of face IDs with high age variance and intra-class variance (face make-up, beard, etc.) containing native surveillance facial imagery data for evaluation. In addition, this work identifies the best recognition methods for different conditions, such as non-masked faces, masked faces, and faces with glasses.
The global Information and Communications Technology (ICT) supply chain is a complex network consisting of all types of participants. It is often formulated as a Social Network to discuss the supply chain network's relations, properties, and development in supply chain management. Information sharing plays a crucial role in improving the efficiency of the supply chain, and datasheets are the most common data format used to describe e-component commodities in the ICT supply chain because of their human readability. However, with the surging number of electronic documents, processing them is far beyond the capacity of human readers, and it is also challenging to process tabular data automatically because of the complex table structures and heterogeneous layouts. Table Structure Recognition (TSR) aims to represent tables with complex structures in a machine-interpretable format so that the tabular data can be processed automatically. In this paper, we formulate TSR as an object detection problem and propose to generate an intuitive representation of a complex table structure to enable structuring of the tabular data related to the commodities. To cope with borderless and small layouts, we propose a cost-sensitive loss function that considers the detection difficulty of each class. Besides, we propose a novel anchor generation method that uses a characteristic of tables: columns in a table should share an identical height, and rows in a table should share the same width. We implement our proposed method based on Faster-RCNN, achieve 94.79% mean Average Precision (AP), and consistently improve AP by more than 1.5% over different benchmark models.
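The anchor generation idea above exploits table geometry: a column anchor should span the full table height, and a row anchor the full table width. The sketch below illustrates that geometric constraint with a uniform grid; the function name, box format, and uniform spacing are illustrative assumptions, not the paper's exact generator.

```python
def table_anchors(table_w, table_h, n_cols, n_rows):
    """Generate (x1, y1, x2, y2) anchors exploiting table geometry:
    every column anchor spans the full table height, and every row
    anchor spans the full table width."""
    col_w, row_h = table_w / n_cols, table_h / n_rows
    cols = [(i * col_w, 0.0, (i + 1) * col_w, table_h) for i in range(n_cols)]
    rows = [(0.0, j * row_h, table_w, (j + 1) * row_h) for j in range(n_rows)]
    return cols, rows
```

Because every column candidate already has the right height (and every row the right width), the detector only needs to localize one dimension per object, which is what makes this prior useful for borderless tables.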
The growth of cellular networks (LTE, 5G, and beyond) has been dramatic, driven by high consumer demand, and these networks are more promising than other wireless networks equipped with advanced telecommunication technologies. The main goals of these networks are to connect billions of devices, systems, and users with high-speed data transmission, high battery capacity, and low latency, and to support a wide range of new applications such as virtual reality, the metaverse, telehealth, online education, autonomous vehicles, advanced manufacturing, and more. To achieve these goals, new approaches to spectrum management that use artificial intelligence (AI) methods have been proposed. This paper presents a vulnerability analysis of spectrum sensing approaches that use AI-based semantic segmentation models to identify cellular network signals under adversarial attacks, with and without defensive distillation as a mitigation method. The results show that the mitigation method can significantly reduce the vulnerability of AI-based spectrum sensing models to adversarial attacks.
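The adversarial attacks analyzed above perturb model inputs along the loss gradient. The classic example of this family is the Fast Gradient Sign Method (FGSM); the sketch below shows FGSM on a toy linear-logistic model whose input gradient can be written in closed form. It illustrates the attack mechanism generically; the specific attacks and models evaluated in the paper are not reproduced here.

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: perturb the input by eps in the
    direction of the sign of the loss gradient w.r.t. the input."""
    return x + eps * np.sign(grad)

def logistic_loss_grad(w, x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x
    for a linear-logistic model p = sigmoid(w . x)."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w
```

A single FGSM step moves the input so the model's score for the true class drops, which is exactly the degradation a defense such as defensive distillation tries to limit.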
Although deep learning has made a significant impact on image/video restoration and super-resolution, learned deinterlacing has so far received less attention in academia and industry. Yet deinterlacing is very well suited to supervised learning from synthetic data, since the degradation model is known and fixed. In this paper, we propose a novel multi-field full-frame-rate deinterlacing network that adapts state-of-the-art super-resolution approaches to the deinterlacing task. Our model aligns features from adjacent fields to the reference field (to be deinterlaced) using deformable convolution residual blocks and self-attention. Our extensive experimental results demonstrate that the proposed method provides state-of-the-art deinterlacing results in terms of both numerical and perceptual performance. At the time of writing, our model ranks first at https://videopersing.ai/benchmarks/deinterlacer.html.
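For context on the deinterlacing task above: an interlaced frame carries only alternate scan lines per field, and the simplest classical reconstruction is "weave", which interleaves two consecutive fields back into one full frame. The sketch below shows that baseline only; it is background for the task, not the learned network the abstract describes.

```python
import numpy as np

def weave(top_field, bottom_field):
    """Classic 'weave' deinterlacing baseline: interleave two fields
    into one frame (rows 0, 2, 4, ... from the top field and rows
    1, 3, 5, ... from the bottom field)."""
    h, w = top_field.shape
    frame = np.empty((2 * h, w), top_field.dtype)
    frame[0::2] = top_field
    frame[1::2] = bottom_field
    return frame
```

Weave produces combing artifacts whenever the fields come from different time instants, which is why motion-aware learned alignment (as in the abstract above) outperforms it.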